The Algernon Argument

Why most supplements fail: IQ improvement skepticism, Yudkowsky & Bostrom’s heuristics, nootropics

That intelligence (g) in healthy people is nearly impossible to improve is clear from the failure of psychology to provide any such method. But why intelligence would be so constant is not as clear: many other cognitive abilities are improvable (like working memory), so why not intelligence?

Arthur Jensen noted the failure of interventions in the 1960s, and the failure remains complete now, half a century later: if you are a bright healthy young man or woman gifted with an IQ in the 130s, there is nothing you can do to increase your underlying intelligence by a standard deviation. New methods like dual n-back or nootropics are trumpeted in the media, and years later are discovered to increase motivation & not intelligence, or to have been overstated, or work only in damaged or deficient subpopulations, or to be statistical/methodological artifacts, or to be tantamount to training on IQ tests themselves which destroys their meaning (like memorizing vocabulary), or to be so anomalous as to verge on fraudulent (like the Pygmalion effect). The only question worth asking is which of these explanations is the real explanation this time.

For IQ in particular, people discussing human-enhancement (especially transhumanists) have proposed a pessimistic observation & evolutionary explanation, dubbed the “Algernon principle” or “Algernon’s law” or my preference, the “Algernon Argument”.

Algernon Argument

The famous SF story “Flowers for Algernon” postulates surgery which triples the IQ score of the retarded protagonist - but which comes with the devastating side-effect that the gain is both temporary and sometimes fatal; fictional evidence aside, it is curious that despite the incredible progress mankind has made in countless areas like building cars or going to the moon or fighting cancer or eradicating smallpox or inventing computers or artificial intelligence, we lack any meaningful way to positively affect people’s intelligence beyond curing diseases & deficiencies. If we compare the smartest people in the world now, like Terence Tao, to the smartest people of more than half a century ago, like John von Neumann, there seems to be little difference. Eliezer Yudkowsky expands on the thought in his essay “Algernon’s Law”, stating it as:

Any simple major enhancement to human intelligence is a net evolutionary disadvantage.

The lesson is that Mother Nature knows best. Or alternately, TANSTAAFL: “there ain’t no such thing as a free lunch”.

Trade-offs are endemic in biology. Anything which isn’t carrying its own weight will be eliminated - organs which are no longer used will be stunted by evolution and within a lifetime, unused muscles & bones will start weakening or being scavenged for resources, as athletes1 and astronauts may find out the hard way2 and bodybuilders perpetually fight3, while shrews cyclically shrink their brains & skulls by 15% to conserve resources in winter (Lázaro et al 2017). Often, if you use a drug or surgery to optimize something, you will discover penalties elsewhere. If you delay aging & lengthen lifespan as is possible in many species, you might find that you have encouraged cancer or - still worse - decreased reproduction4 as evidenced by the dramatic deaths of salmon or brown antechinus5; if your immune system goes all-out against disease, you either deplete your energetic and chemical reserves6 or risk autoimmune disorders; similarly, we heal much slower than seems possible despite the clear advantage7; if you try to enhance attention with an amphetamine, you destroy creativity, or if the amphetamines reduce sleep, you damage memory consolidation or peripheral awareness8; or improving memory (which requires active effort to maintain9) also increases sensitivity to pain10 and interferes with other mental tasks1112 (as increased WM does, slightly13); if a mouse invests in anti-aging cellular repairs, it may freeze to death14, and so on. (What are we to make of inducing savant-like abilities by brute-force suppression of brain regions15, or tDCS improving learning?) From this perspective, it’s not too surprising that human medicine may be largely wasted effort or harmful16 (although most - especially doctors - would strenuously deny this). “Hardly any man is clever enough to know all the evil he does.”17

An analogy to complex systems is a superficial analysis at best. Many complex systems are routinely optimizable on some parameter of interest by orders of magnitude, or at least by large factors. Economies grow exponentially, on the back of all sorts of improving performance curves which make us richer than emperors compared with our ancestors; the miracle of economic growth, built on thousands of distinct complex systems being optimized by humans, seems to go unnoticed, so normal and taken-for-granted has it become. If we were computers, an ordinary nerd with access to some liquid nitrogen could double our clock speed.

With intelligence, on the other hand, not only do we have no interventions to make one an order of magnitude smarter on some hypothetical measure of absolute intelligence (perhaps such a man would be to us as we are to dogs?), but we have no interventions which make one a few times smarter (the smartest man to ever live?), nor do we even have any interventions which can move one more than a few percentage points up in the general population! We remain the same. It is as if scientists and doctors, after studying cars for centuries, shamefacedly had to admit that their thousands of experimental cars all still had their speed throttles stuck at 25-30kph - but the good news was that this new oil additive might make a few of the cars run 0.1kph faster!

This is not the usual state of affairs for even extremely complex systems. This raises the question of why all these cars are so uniformly stuck at a certain top speed and how they got to be so optimized; why are we like these fantastical cars, and not computer processors?

Costs

Intelligence is an almost unalloyed good when we look at correlations in the real-world for income, longevity, happiness, contributions to science or medicine, criminality, favoring of free speech etc.18. Why is it, then, that we can find quotes like “the rule that human beings seem to follow is to engage the brain only when all else fails - and usually not even then”19 or “In effect, all animals are under stringent selection pressure to be as stupid as they can get away with”20? Why does so much psychological research, especially heuristics & biases, seem to boil down to a dichotomy of a slow accurate way of thinking and a fast less-accurate way of thinking (“system I” vs “system II” being just one of the innumerable pairs coined by researchers21)?

Because thinking is expensive and slow, and while it may be an unalloyed good, it is subject to diminishing returns like anything else (if it is profitable at all in a particular niche: no bacterium needs sophisticated cognitive skills, nor do most mammals), and other things become more valuable. Just another tradeoff.

EOC

In “The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement”22 (Human Enhancement 2008), Nick Bostrom and Anders Sandberg put this principle as a question or challenge, “evolutionary optimality challenge” (EOC):

If the proposed intervention would result in an enhancement, why have we not already evolved to be that way?

We could take this as a biological version of Chesterton’s fence. Evolution is a massively parallel search process which has been running on humans (and predecessor organisms) for billions of years, ruthlessly optimizing for reproductive fitness. It is an immensely stupid and blind idiot god which will accomplish its goal by any available means: if evolutionary mechanisms cause individuals to drive their own species extinct because that was the fittest thing for each individual to do, or if an experimenter enforces the highly artificial & unlikely conditions for group selection and evolution produces group norms of mass infanticidal cannibalism, so be it!

It is of course possible for a new mutation to be fitter or for the environment to change and render some alternative more fit. This is sometimes true, but it is overwhelmingly usually false. If you do not believe me, feel free to go try to beat just the stock market, or if you’re up for a challenge, be more reproductively fit in a tuna fish’s niche than the tuna fish. Every so often you hear of a hedge fund which found alpha, or of an invading alien species that is beating the native flora: but remember that they are just one such fund or species out of thousands and thousands trying. Francis Bacon:

It was a good answer that was made by one who when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods, - “Aye”, asked he again, “but where are they painted that were drowned after their vows?” And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happens much oftener, neglect and pass them by.

They are exceptions which prove the rule; they are, in fact, the exceptions which cause the rule to be true, by exploiting the niche or opportunity. Suppose we turned out to be harmfully miserly with calories and there is some receptor (such as those acted upon by stimulants like caffeine or nicotine) which triggers a cascade of changes leading to behavior which is a superior tradeoff of caloric consumption vs activity. Evolution would slowly increase the market-share of alleles which affect this receptor, and after a while, the new level of activity would become optimal and now use of a stimulant affecting the receptor would cease to be fit because it cranks it too high. There may be some such opportunities available to humans today, since we know of past opportunities like adult lactose tolerance which have been sweeping through gene pools over the past few thousand years, but can we really claim that every intervention by which we now differ from our distant ancestors can be traced to such reproductive-fitness justifications? (And people think evolutionary psychology overreaches and speculates without evidence!) Theoretical calculations apparently indicate that in a changing environment, the “reproductive fitness gap” between the current allele and its alternatives will be small, and large gaps exponentially rare23; this seems intuitive - to continue the market analogy, the bigger the arbitrage, the faster it will be exploited.
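To make the arbitrage intuition concrete, here is a minimal sketch (a toy deterministic haploid selection model, not the cited calculations): even a small fitness edge sweeps to near-fixation within a modest number of generations, roughly in inverse proportion to its size, so large standing gaps should indeed be rare and transient.

```python
# Toy sketch (assumed model, not the cited calculations): deterministic haploid
# selection, showing how fast an allele with selective advantage s sweeps from
# rarity to near-fixation.
def generations_to_fixation(s, p0=0.01, p_target=0.99):
    """Generations for allele frequency to rise from p0 to p_target."""
    p, t = p0, 0
    while p < p_target:
        p = p * (1 + s) / (1 + p * s)  # standard one-generation frequency update
        t += 1
    return t

for s in (0.001, 0.01, 0.1):
    print(f"selective advantage {s:>5}: ~{generations_to_fixation(s):,} generations")
# Larger advantages fix roughly in inverse proportion to s: the bigger the
# 'arbitrage', the faster it is exploited, so only small gaps remain standing.
```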

Obviously we humans do intervene all the time, and many of those interventions are worthwhile. Women, for example, are big fans of birth control, and if the female reproductive system isn’t controlled by evolution, nothing is. How are we to reconcile the theoretical expectation that we should find it nigh-impossible to beat evolution at its own game with the observed fact that we seem to intervene successfully all the time?

Loopholes

What a book a devil’s chaplain might write on the clumsy, wasteful blundering, low and horribly cruel works of nature!

Charles Darwin, 1856-07-13 letter to J.D. Hooker, More Letters of Charles Darwin, Volume 1

It is a profound truth—realized in the nineteenth century by only a handful of astute biologists and by philosophers hardly at all (indeed, most of those who held any views on the matter held a contrary opinion)—a profound truth that Nature does not know best; that genetical evolution, if we choose to look at it liverishly instead of with fatuous good humor, is a story of waste, makeshift, compromise and blunder.

Sir Peter Medawar, “The Future of Man” (1959)

There may be no free lunches, but might there be some cheap lunches? Yudkowsky’s formulation points out several ways to escape the argument:

  1. interventions may not be simple

    So one might find major enhancements through some very complex surgery or prosthetic; perhaps brain implants which expand memory or enable (controlled) wireheading. Evolution is a search procedure for finding local optima, which are not necessarily global optima. Examples like the giraffe’s recurrent laryngeal nerve demonstrate such traps, but how would evolution fix them? Even if a mutation suddenly made the nerve take the shorter route, it’s not clear what other changes would have to be made to deal with this improvement, and this combination of multiple rare mutations may not happen enough times for the small reproductive fitness improvement (fewer resources used on nerves) to make it to fixation.

  2. the simple interventions may not lead to a major enhancement

    Nutritional supplements are examples; it makes perfect sense that fixing a chemical deficiency could be a simple matter and enhance reproductive fitness - but one would expect only minor mental enhancements and this effect would not generalize to very many people. (Similarly, most nootropics do not do very much.)

  3. the intervention may be simple, give major enhancements, but result in a net loss of reproductive fitness

    The famous Ashkenazi theory of intelligence comes to mind. According to this theory, the Ashkenazi were forced into occupations demanding intelligence, and micro-selected for high intelligence. Except that the high-IQ genes were not previously prevalent among either Jews or gentiles because - like sickle-cell anemia - when they become too prevalent, they result in horrible diseases like Tay-Sachs. In 2007, a unique mutation in a Scottish family was found to increase verbal IQ in affected family members, versus unaffected ones, by something like 25 points; this would be great for them - except for how that mutation starts causing blindness in one’s 20s or later. (In general, it’s much easier to find mutations or other genetic changes which break intelligence - as in retardation24 and autism25 - than ones which help.)

Bostrom also offers 3 categories of ways in which interventions can escape his ‘EOC’:

  1. Changed Tradeoffs. Evolution ‘designed’ the system for operation in one type of environment, but now we wish to deploy it in a very different type of environment. It is not surprising, then, that we might be able to modify the system better to meet the demands imposed on it by the new environment.26

  2. Value Discordance. There is a discrepancy between the standards by which evolution measured the quality of her work, and the standards that we wish to apply. Even if evolution had managed to build the finest reproduction-and-survival machine imaginable, we may still have reason to change it because what we value is not primarily to be maximally effective inclusive-reproductive-fitness optimizers.

  3. Evolutionary Restrictions. We have access to various tools, materials, and techniques that were unavailable to evolution. Even if our engineering talent is far inferior to evolution’s, we may nevertheless be able to achieve certain things that stumped evolution, thanks to these novel aids.

An example of how not to escape the EOC, I believe, is offered in “The Likelihood of Cognitive Enhancement” (Lynch et al 2012), when the authors attempt to argue that powerful nootropics are possible:

But perhaps the ‘room for improvement’ issue can be recast in terms of brain evolution by asking whether comparative anatomical evidence points to strong adaptive pressures for designs that are logically related to improved cognitive performance. Anatomists often resort to allometry when dealing with questions of selective pressures on brain regions. Applied to brain proportions, this involves collecting measurements for the region of interest - eg. frontal cortex – for a series of animals within a given taxonomic group and then relating it to the volume or weight of the brains of those animals. This can establish with a relatively small degree of error whether a brain component in a particular species is larger than would be predicted from that species’ brain size. While there is not a great deal of evidence, studies of this type point to the conclusion that cortical subdivisions in humans, including association regions, are about as large as expected for an anthropoid primate with a 1350cc brain. The volume of area 10 of human frontal cortex, for example, fits on the regression line (area 10 versus whole brain) calculated from published data (Semendeferi et al 2001) for a series composed of gibbons, apes and humans (Lynch and Granger, 2008). Given that this region is widely assumed to play a central role in executive functions and working memory, these observations do not encourage the idea that selective pressures for cognition have differentially shaped the proportions of human cortex. Importantly, this does not mean that those proportions are in any sense typical. The allometric equations involve different exponents for different regions, meaning that absolute proportions (eg. primary sensory cortex versus association cortex) change as brains grow larger. The balance of parts in the cortex of the enormous human brain is dramatically different than found in the much smaller monkey brain: area 10, for instance, occupies a much greater percentage of the cortex in man. But these effects seem to reflect expansion according to rules embedded in a conserved brain plan rather than selection for the specific pattern found in humans (Finlay et al 2001).

…But our argument here is that these expanded cortical areas are likely to use generic network designs shared by most primates; if so, then it appears unlikely that the designs are in any sense ‘optimized’ for cognition. We take this as a starting position for the assumption that the designs are far from being maximally effective for specialized human functions, and therefore that it is realistic to expect that cognition-related operations can be significantly enhanced.

I would agree that the human brain’s architecture does not seem to be optimal in any universal sense; and that this would constitute an interesting argument if one were arguing that artificial intelligences will not inherently be limited to a level of intelligence comparable to the greatest human geniuses.

However, this does not offer hope for nootropics because the human brain can easily be suboptimal in its gross anatomical architecture but close to optimal in any factor easily tweaked by chemicals! (A suggestion that brain region size is suboptimal is a suggestion only that a large change in brain region size might lead to large gains - but large changes are neither easy, simple, nor possible currently.)
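To make the allometric method described in the quoted Lynch et al passage concrete, here is a minimal sketch with made-up numbers (the species values are hypothetical, not the published Semendeferi data): regress log region volume on log whole-brain volume across a primate series, then check whether the human value sits on or off the fitted line.

```python
# Illustrative allometry sketch with hypothetical volumes (cc), not real data:
# fit log(region) ~ log(whole brain) across a primate series, then compare the
# human observation to the regression prediction.
import numpy as np

brain  = np.array([90.0, 120.0, 400.0, 500.0])  # hypothetical whole-brain volumes
region = np.array([0.5,  0.7,   3.0,   4.0])    # hypothetical 'area 10' volumes

slope, intercept = np.polyfit(np.log(brain), np.log(region), 1)

human_brain, human_region = 1350.0, 14.0        # hypothetical human values
predicted = np.exp(intercept + slope * np.log(human_brain))

print(f"allometric exponent: ~{slope:.2f}")
print(f"predicted human region volume: ~{predicted:.1f} cc (observed: {human_region} cc)")
# If observed ~= predicted, the region is 'about as large as expected' for a brain
# of that size -- the conclusion Lynch et al draw for human cortical subdivisions.
```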

Examples

Teleology is like a mistress to a biologist: he cannot live without her but he’s unwilling to be seen with her in public.27

Bostrom’s criteria are more general, so we’ll use them.

Birth control is a clear example of satisfying loophole #2, ‘value discordance’. Ovulation is under the body’s control and is linked in evolutionary psychology to many changes in behavior; non-procreative sex is common throughout the animal kingdom, where it serves other purposes like forming social connections in bonobo troops. Hunter-gatherer women space births by nursing each child for years; maternal cannibalism has been observed when mothers are under particular stress (and perhaps also in humans). So, it’s clear that there is birth control capability already available to hominids, and it is not too surprising that it’s possible to render a healthy woman entirely infertile without major health consequences. Many women would prefer evolution had done just this! They do not value having a dozen children while young; they would rather have just 2 at a time of their choosing - if any at all. Why is evolution not so obliging? Well, it obviously would not be very reproductively fit…

Pacemakers are an example of #3: evolution couldn’t afford to engineer more reliable hearts, in part for lack of electronic microchips and possibly because humans are already at the limits of the performance envelope28.

Many traits related to nutrition fall into the category of #1.29

How about supplements? Most supplements are just tweaking biochemical processes, and don’t obviously fall under 1, 2, or 3; and the few which seem to enhance healthy humans are finicky creatures (see my introduction to Nootropics). Melatonin, for example, may seem particularly questionable as one’s body secretes considerable quantities in an intricate cycle (but see later).

Flynn Effect

The Flynn effect is a possible counter-example: it operates broadly over many countries, improves average IQ by perhaps 10 or more points over the last century30, presumably is environmental, and operates without any explicit expensive eugenics programs or anything like that.

However, there are several ways in which the Flynn effect respects Algernon’s argument and passes the loopholes:

  1. the Flynn effect is limited in its gains, and so will not result in major gains

    the Flynn effect has already stopped in 3 of the wealthiest countries and reversed to some degree. The situation in the US is unclear, but given the outright losses in verbal & science skills seen 1981–2010 in the most intelligent of Midwestern students31, this is consistent with a Flynn effect operating through eliminating deficiencies & improving the lower end or with a Flynn effect that has ceased to exist

  2. the Flynn effect is apparently environmental, and one of the most plausible explanations is that it is due either to the elimination of nutritional deficits or to public health interventions against infectious diseases.

    In neither case are interventions ‘easy’ in any sense, nor are the interventions available to evolution - if one’s diet is lacking in an essential element like iodine, evolution cannot simply conjure the deficiency away; nor can it invent any better immune systems than it already has as part of the usual arms race with infectious agents. As we already noted, we could expect nutritional interventions to produce small benefits, and we might expect implementing a whole battery of possible improvements (fixing iodine deficiency, iron deficiency, a dozen childhood infections, etc.) to produce much of what we see with the Flynn effect. But we would expect the gains to be specific and to stop once the low-hanging fruit is exhausted. (There cannot be indefinitely many deficiencies and infections!) This too is what we observe with the halting of the Flynn effect.

  3. The intelligence gains from the Flynn effect may not be reproductive-fitness-increasing; IQ correlates strongly with many desirable things like income, happiness, knowledge, education, etc. - but not with having more children than average. The correlations are found both in the West and worldwide. (It is of course possible that the Flynn effect causes IQ gains and reproductive fitness increases on the lower end of the spectrum and that high IQ is intrinsically reproductive-fitness-reducing in the modern environment, but the observation is suggestive.)

  4. the Flynn effect may not actually reflect intelligence gains but rather damage to the validity of the subtests in which the gains appear, in which case it is irrelevant

Piracetam

In “Growing up is hard”, Eliezer Yudkowsky remarks that Bostrom’s EOC is:

…one reason to be wary of, say, cholinergic memory enhancers [such as piracetam]: if they have no downsides, why doesn’t the brain produce more acetylcholine already? Maybe you’re using up a limited memory capacity, or forgetting something else…

Let’s consider the specific case of piracetam. Piracetam is so old and has so many studies on its efficacy (real if not substantial) and safety (utterly) that it screens off a lot of secondary considerations.

  • Might piracetam escape the EOC with #3?

    No. Whatever receptors or buttons piracetam pushes could already be pushed by the brain the usual way. There is nothing novel about piracetam in that sense.

  • Might piracetam escape the EOC with #2?

    Perhaps. Hard to see how piracetam trades off reproductive fitness for something else, though. Since its synthesis in 1964, minimal side-effects or other safety issues have been noted, unlike other drugs such as caffeine or aspirin.

  • Might piracetam escape the EOC with #1?

    Probably. Many tradeoffs are different in contemporary First World countries than in the proverbial Stone Age veldt. We should look more closely at what piracetam does and what tradeoffs it may be changing.

A ‘cholinergic’ operates by encouraging higher levels of the acetylcholine neurotransmitter; acetylcholine is one of the most common neurotransmitters. If serotonin is loosely associated with mood, we might say that acetylcholine is loosely associated with the ‘velocity’ of thoughts in the brain. If one is using more acetylcholine, one needs to create more acetylcholine (the brain cannot borrow indefinitely like the US federal government). Acetylcholine is made out of the essential nutrient choline.

An interesting thing about piracetam use is that it doesn’t do very much by itself32. It is charitably described as ‘subtle’. The standard advice is to take a choline supplement with the piracetam: a gram of soy lecithin, choline bitartrate, or choline citrate.

Isn’t this interesting? Presumably we are not Irish peasants consuming wretched diets of potato, potato, and more potato, with some mutton on the holidays. We are cognizant of how a good diet & exercise are prerequisites to brain power. Yet, a gram of straight choline still boosts piracetam’s effects from subtle or placebo, to noticeable & measurable.

This suggests that perhaps a normal First World diet is choline-deficient. If even well-fed humans must economize on choline & acetylcholine, then surely our ancestors, who were worse off nutritionally, had to economize even more severely. Evolution would frown on squandering acetylcholine on idle thoughts like ‘what was that witty saying by Ugh the other day?’ That choline might be needed in the next famine! This suggestion is buttressed by one small mouse experiment:

Administering choline supplementation to pregnant rats improved the performance of their pups, apparently as a result of changes in neural development in turn due to changes in gene expression (Meck et al 1988; Meck & Williams 2003; Mellott et al 2004). Given the ready availability of choline supplements, such prenatal enhancement may already (inadvertently) be taking place in human populations. Supplementation of a mother’s diet during late pregnancy and 3 months postpartum with long-chained fatty acids has also been demonstrated to improve cognitive performance in human children (Helland et al 200333).34

Past our embryo-hood, we can’t tell our bodies that they have available as much choline as they could possibly need, that we value our synapses blazing at every moment more than a better chance of surviving famines (which effectively no longer exist). So we have to override them, for our own good.

(It’s worth noting here that there is considerable overlap between #1 and #2. Whether you see piracetam as a conflict in values between evolution’s worst-case planning and our desire for greater average or peak performance, or as a shift in optimal expenditure based on a historical drop in the cost of bulk quantities of choline, is a matter of preference.)

Melatonin

How about melatonin? It is a clear-cut example of failing #3, but perhaps it passes under #1 or #2, like piracetam?

A shift worker is an obvious case of value discordance: humans are meant to work mostly during the day, with minimal dangerous night-time activity. Shift workers perversely insist on doing the exact opposite, even struggling against their circadian rhythms (to the detriment of their health). Evolution wots not of your ‘employment contract’, pitiful human!

Regular people have a less extreme version of the shift worker’s dilemma. The modern population doesn’t rise and set with the sun, for imponderable reasons. (My personal theory is widespread akrasia: darkness once overcame hyperbolic discounting and forced the ancients to bed, but we have electric lighting and can stay up indefinitely.) This leads to a values mismatch, and a similar solution.

Modafinil

Modafinil is another drug that seems suspiciously like a free lunch. The side-effects are minimal and rare, and the benefit quite unusual and striking: not needing to sleep for a night. The research on general cognitive benefits is mixed but real35. (My own experience with armodafinil was that after 41 hours of sleep-deprivation, my working memory and focus were actually better than normal as judged by dual n-back scores! An anomaly, but still curious.) Yes, modafinil costs money, but that’s not really relevant to our health or to Evolution. Yes, there is, anecdotally, a risk of coming to tolerate modafinil (although no addiction), but again that doesn’t matter to Evolution - there would still be benefits before the tolerance kicked in.

What heuristic might we use?

  • Chemically, modafinil does not seem to be so bizarre that evolution could not stumble across it or an equivalent mechanism, so probably we cannot appeal to #3, “evolutionary restrictions”. Its mechanism is not very clear, but mostly seems to manipulate things like the histamine system (and to a much lesser extent, dopamine), all things Evolution could easily do.

  • Nor is it clear what value discordance might be involved. We could come up with one, though.

    If one theorized that modafinil came with a memory penalty, inasmuch as memory consolidation and the hippocampus seem to intimately involve sleep, then we might have a discordance where we value being able to produce and act more than being able to remember things. This might even be a sensible tradeoff for a modern man: why not sacrifice some ability to learn or remember long-term, since you can immediately gain back that capacity and more by suitable use of efficient memory techniques like spaced repetition?

  • #1 seems promising. Like piracetam, there is something in short supply that modafinil would use more of: calories! While you are awake, you are burning more calories than while asleep. During the day, synapses build up levels of some proteins, which get wiped out by sleep; is this because synapses and memories are expensive36 and cannot be allowed to consume ever more resources without some sort of ‘garbage collection’, synaptic homeostasis? Fly & rat studies bear out some of the predictions of the model and may lead to interesting new findings37 (see also Born & Feld 2012 discussing Chauvette et al 2012).

    Previously noted was the metabolic cost of defending against infections; one animal study found the proximate cause of death in sleep deprivation to be bacterial infections38. You are also - in the ancient evolutionary environment - perhaps exposing yourself to additional risks in the dark night. (This would be the preservation & protection theory of sleep.)

    Resource usage is a real concern for the human brain, along with scaling issues39: it uses ~20% of the body’s energy in adults, and up to 87% in infants (a back-of-the-envelope check of these figures follows at the end of this section). One blogger says:

    The human brain is also extremely “expensive tissue” (Aiello & Wheeler 1995). Although it only accounts for 2% of an adult’s body weight, it accounts for 20–25% of an adult’s resting oxygen and energy intake (Attwell & Laughlin 2001: 1143). In early life, the brain even makes up for up to 60–70% of the body’s total energy requirements. A chimpanzee’s brain, in comparison, only consumes about 8–9% of its resting metabolism (Aiello & Wells 2002: 330). The human brain’s energy demands are about 8 to 10 times higher than those of skeletal muscles (Dunbar & Shultz 2007: 1344), and, in terms of energy consumption, it is equal to the rate of energy consumed by leg muscles of a marathon runner when running (Attwell & Laughlin 2001: 1143). All in all, its consumption rate is only topped by the energy intake of the heart (Dunbar & Shultz 2007: 1344).

    There are additional disadvantages to increased intelligence - larger heads would drive maternal & infant mortality rates even higher than they are40. And it’s worth noting that while the human brain is disproportionately huge, the human cerebral cortex is not any bigger than one would predict by extrapolating from gibbon or ape cortex volumes, despite the human lineage splitting off millions of years ago.41 The human brain seems to be special only in being a scaled-up primate brain42, with close to the metabolic limit in its number of neurons43 (which suggests a resolution to the question of why, despite convergent evolution of relatively high intelligence44, only primates “took off”). There are other ways in which humans seem to have hit intelligence limits - why did our ancestors’ brains grow in volume for millions of years45, only to come to a halt with the Neanderthals46 & Cro-Magnons and actually start shrinking47 to the modern volume, and why did survival to old age only start increasing 50,000 years ago or later48, well after humans began developing technology like controlled fire (>=400,000 years ago49); or why are primate guts (also resource-expensive) inversely correlated with brain size (and likewise in one fish breeding experiment), or muscles starved of sugars in favor of brains50; or why do the Ashkenazi seem to pay for their intelligence with endemic genetic disorders51; or why does evolution permit human brains to shrink dramatically with age, by as much as 15% of volume, despite the huge performance losses, while the brains of our closest relative species (the chimpanzees) do not shrink at all?52 For that matter, why are heads, central nervous systems, and primate-level intelligence so extremely rare on the tree of life, with no examples of convergent evolution of such intelligence (as opposed to examples like basic eye-spots, which are such a fantastically adaptive tool that they have independently evolved “somewhere between 45 and 60 times”)?53

    The obvious answer is that diminishing returns have kicked in for intelligence in primates and humans in particular54. (Indeed, it’s apparently been argued that not only are humans not much smarter than primates55, but there is little overall intelligence difference among vertebrates56. Humans lose embarrassingly on even pure tests of statistical reasoning; we are outperformed on the Monty Hall problem by pigeons and to a lesser extent monkeys!) The last few millennia aside, humans have not done well and have apparently verged on extinction before, and the demographic transition57 and anthropogenic existential risks suggest that our current success may be short-lived (not that agriculture & civilization were great in the first place). Some psychologists have even tried to make the case that increases in intelligence do not lead to better inferences or choices (Hertwig & Todd 2003).

    Modafinil or modafinil-like traits might be selected against due to increased calorie expenditure, decreased calorie consumption, or the risks of night-time activity. These explanations all fail in a modern environment: modern societies have murder and assault rates orders of magnitude lower than those seen among aborigines58, and calories are so abundant that they have begun reducing reproductive fitness (we call this poisoning-by-too-many-calories the obesity epidemic).

Is that last a convincing defense of modafinil against the EOC or Algernon’s principle? It seems reasonable to me, if not as strong a defense as I would like.
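As a rough check on the energy figures quoted above, here is a back-of-the-envelope sketch; the resting metabolic rate and the body/brain masses are assumed round numbers, not taken from the cited sources.

```python
# Back-of-the-envelope check of the quoted brain-energy figures. The resting
# metabolic rate and masses below are assumed round numbers (not from the sources).
resting_kcal_per_day = 1600       # assumed adult resting metabolic rate
brain_energy_fraction = 0.22      # quoted: 20-25% of resting energy use
brain_mass_kg = 1.4               # assumed; ~2% of a 70 kg adult
body_mass_kg = 70.0

brain_kcal = resting_kcal_per_day * brain_energy_fraction
rest_kcal = resting_kcal_per_day - brain_kcal

brain_per_kg = brain_kcal / brain_mass_kg
rest_per_kg = rest_kcal / (body_mass_kg - brain_mass_kg)

print(f"brain:        ~{brain_kcal:.0f} kcal/day  (~{brain_per_kg:.0f} kcal/kg/day)")
print(f"rest of body: ~{rest_kcal:.0f} kcal/day  (~{rest_per_kg:.0f} kcal/kg/day)")
print(f"per-kg ratio: ~{brain_per_kg / rest_per_kg:.0f}x")
# ~2% of body mass drawing >20% of resting energy: per kilogram, the brain burns
# roughly an order of magnitude more than the rest of the body on average.
```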

Heroin

How about opiates? Morphine and other painkillers can easily be justified as evolution not knowing when a knife cut is by a murderous enemy and when it’s by a kindly surgeon (which didn’t exist way back when), and choosing to make us err on the side of always feeling pain. But recreational drug abuse?

  • #1 doesn’t seem too plausible - what about modern society would favor opiate consumption outside of medicinal use? If one wishes to deaden the despair and ennui of living in a degenerate atheistic material culture, we have beer for that.59

  • #3 doesn’t work either; opioids have been around for ages and work via the standard brain machinery.

  • #2 might work here as well, but this dumps us straight into the debate about the War on Drugs and what harm drug use does to the user & society.

But even this analysis is helpful: we now know on what basis to oppose drug use and, most importantly, what kind of evidence we should look for to support or falsify our beliefs about heroin.

Ecstasy

MDMA is another popular illicit drug. Reading accounts of early MDMA use or studies on its beneficial psychological properties (a bit like those claimed for previous psychedelics like LSD or psilocybin), one is struck by how fear seems to be a common trait - or rather, the lack of fear:

With Ecstasy, I had simply stepped outside the worn paths in my brain and, in the process, gained some perspective on my life. It was an amazing feeling. Small inconsistencies became obvious. “I need money, I have a $500 motorcycle that I’m too scared to ride, so why not sell it?” So did big psychological ones: “The more angry I am at myself, the more critical I am of my girlfriend. Why should I care how Carol chews her gum?” Ecstasy nudges you to think, very deeply, about one thing at a time. (It wasn’t that harsh LSD feeling, where every thought seems like an absurd paradox - like the fact that we’re all, deep down, just a bunch of monkeys.)…A government-approved study in Spain has just begun in which Ecstasy is being offered to treat rape victims for whom no treatment has worked, based on the premise that MDMA “reduces the fear response to a perceived emotional threat” in therapy sessions. A Swiss study in 1993 yielded positive anecdotal evidence on its effect on people suffering from post-traumatic stress disorder. And a study in California may soon begin in which Ecstasy is administered to end-stage cancer patients suffering from depression, existential crises and chronic pain. The FDA will be reviewing the protocol for Stage 2 of the trial; results are expected in 2002.

Reading, I can’t help but be reminded of the popular self-help practice “rejection therapy” (an exposure therapy), elements of which reappear among businessmen/entrepreneurs, pick up artists, shyness therapists60, nerds, and others: one goes out in public and makes small harmless requests of various strangers until one is no longer uncomfortable or afraid. Eventually one realizes that it is harmless to ask - the worst that will happen is they will say no - and one will presumably become a more confident, less fearful, happier, and more effective person. What is the justification for this? After all, one doesn’t regard being afraid of, say, snake venom as a problem and a good reason to undertake a long regimen of Mithridatism! Snake venom is dangerous and should be feared, and deliberately destroying one’s useful fear would be like a mouse doing ‘cat therapy’.

Rejection therapy fans argue that there is a mismatch between fear and reality: our fears and social anxiety are calibrated for the world of a few centuries ago, where >90% of the population lived on farms and in villages and a poor reputation & social rejection could mean death; in the modern world, social rejection is a mere inconvenience, because even if one is rejected by one’s extended circle of 150 people, there are 100x more people in a small town, and thousands of times more in a city (to say nothing of a megalopolis like New York City, where the numbers run into the millions). Risk-taking behavior which is optimal in the village will be ludicrously conservative and inefficient in the big city.

If this theory were correct (it is possible but far from proven), and if MDMA worked the same way (unlikely), then we would have a clear example of #1, “changed tradeoffs”: we are too risk-averse and fearful of social sanction for a modern environment. (Curiously, this is also a proposed explanation for the apparent increase in psychopathy in modern societies: psychopaths are “defectors” or “hawks” who would normally be suppressed or less fit in a tightly-networked tribe or village, but can thrive in the reputation-poor modern world as they move from place to place and social circle to social circle, leaving behind their victims.61)
